Interior Point Methods for Nondifferentiable Optimization

Authors

  • Jean-Louis GOFFIN
  • Jean-Philippe VIAL
Abstract

We describe the analytic center cutting plane method and its relationship to classical methods of nondifferentiable optimization and column generation. Implementation issues are also discussed, and current applications are listed.

Keywords: Projective Algorithm, Analytic Center, Cutting Plane Method.

This work has been completed with support from the Fonds National Suisse de la Recherche Scientifique, grant 12-42503.94, from the Natural Sciences and Engineering Research Council of Canada, grant number OPG0004152, and from the FCAR of Quebec.

GERAD/Faculty of Management, McGill University, 1001 Sherbrooke West, Montreal, Que., H3A 1G5, Canada. E-mail: [email protected]. LOGILAB/Management Studies, University of Geneva, 102, Bd Carl-Vogt, CH-1211 Geneve 4, Switzerland. E-mail: [email protected].

1 Introduction

Nondifferentiable convex optimization may be deemed an arcane topic in the field of optimization. True enough, problems formulated as nondifferentiable optimization problems can often be reformulated in a smooth format. For instance, the typical NDO problem
\[ \min_{x \in X} \max_{t \in T} f_t(x), \]
where $X \subset \mathbb{R}^n$, $T$ is an arbitrary finite set, and $f_t(x)$ is a convex function of $x$ for all $t \in T$, can be written as the convex smooth nonlinear programming problem
\[ \min_{x \in X} \{ z \mid z \ge f_t(x), \ \forall t \in T \}. \]
Another possibility is to use smoothing techniques, which essentially replace the real function $f(x) = \max\{0, x\}$ by the function
\[ f(x, \varepsilon) = \begin{cases} 0 & \text{if } x \le -\varepsilon, \\ \frac{(x+\varepsilon)^2}{4\varepsilon} & \text{if } -\varepsilon < x \le \varepsilon, \\ x & \text{otherwise.} \end{cases} \]
Transformations such as these often work well. However, there are cases where they are not appropriate and the user must face the truly nondifferentiable aspect of the problem. This is particularly so if the set $T$ has exponential or infinite cardinality. Later, we shall discuss other well-known examples pertaining to the field of large scale programming or to semi-infinite programming.

It is also well known that standard methods for smooth programming may completely fail when applied to non-smooth problems. (See Wolfe's example [39] describing the failure of the extension of the steepest descent method.) Nondifferentiable optimization grew and prospered as a branch of optimization in its own right, and many proposals have been made to meet this difficult and challenging problem. We shall also review them, though very briefly.

A major breakthrough in the field of optimization was the recent contribution of Karmarkar [24] and followers to interior point methods. Interior Point Methods, or IPMs, were first applied to linear programming, and then to structural programming [30], i.e., to that branch of convex programming dealing with self-concordant functions and self-concordant barriers [31]. In that theory, the concept of analytic center plays a key role. Although the concept was put forth in [22], with its celebrated linearized method of centers, it was only many years later that its use in optimization was suggested by [37]. Several authors realized the potential of this concept in relation with the well-known cutting plane approach to nondifferentiable optimization [40, 12] or mixed integer programming [27]. The purpose of this survey is to review the developments and applications of the so-called analytic center cutting plane method.

2 Nondifferentiable problems

The canonical convex problem can be cast as
\[ \min \{ f(x) \mid x \in Q \cap Q_0 \}, \]
where $Q \subset \mathbb{R}^n$ is a closed convex set, $Q_0 \subset \mathbb{R}^n$ is a compact convex set, and $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function. We will assume that the set $Q_0$ is defined explicitly and is endowed with a self-concordant barrier, and that iterates remain in the interior of that set, while the function $f$ and the set $Q$ are defined by the following oracle. Given $\bar x \in \operatorname{int} Q_0$, the oracle answers either:

1. $\bar x$ is in $Q$, and there is a support vector $\xi \in \partial f(\bar x)$, i.e.,
\[ f(x) \ge f(\bar x) + \langle \xi, x - \bar x \rangle, \quad \forall x \in \mathbb{R}^n; \]
2. $\bar x$ is not in $Q$, and there is a separation vector $a$, $\|a\| = 1$, such that
\[ \langle a, x - \bar x \rangle \le 0, \quad \forall x \in Q \cap Q_0. \]
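To make this oracle protocol concrete, here is a minimal Python sketch under assumptions not made in the paper: a toy piecewise-linear objective $f(x) = \max_i \langle p_i, x \rangle$ and $Q$ taken as the unit ball, so both answer types can be written in closed form. The data matrix `P` is invented for the illustration.

```python
import numpy as np

# Hypothetical toy data: f(x) = max_i <p_i, x>, Q = {x : ||x|| <= 1}.
P = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])

def oracle(x_bar: np.ndarray):
    """Answer 1: ('optimality', xi) with xi in the subdifferential of f.
    Answer 2: ('feasibility', a) with ||a|| = 1 separating x_bar from Q."""
    nrm = np.linalg.norm(x_bar)
    if nrm > 1.0:                     # x_bar is not in Q
        a = x_bar / nrm               # <a, x - x_bar> <= 0 for all x in Q
        return "feasibility", a
    i = int(np.argmax(P @ x_bar))     # an active piece of the max
    return "optimality", P[i]         # p_i supports f at x_bar

kind, vec = oracle(np.array([2.0, 0.0]))   # returns a feasibility cut
kind, vec = oracle(np.array([0.3, 0.1]))   # returns a subgradient
```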
Nondifferentiable optimization problems arise in the following context: an initial large scale optimization problem is transformed, via a decomposition technique (Benders decomposition or Lagrangian relaxation), into an equivalent nondifferentiable problem of much smaller dimension, but with an exponential number of constraints. Other related fields are variational inequalities and semi-infinite programming. Let us now review these cases in greater detail.

Lagrangian relaxation applies to the problem $\min \{ f(x) \mid g(x) \le 0, \ x \in X \}$, where $f$ is convex and $g$ is a vector-valued function whose individual components are convex. The set $X$ is arbitrary. We introduce the Lagrangian
\[ L(x, u) = f(x) + \langle u, g(x) \rangle. \]
Weak duality always holds:
\[ \min_{x \in X} \max_{u \ge 0} L(x, u) \ \ge\ \max_{u \ge 0} \min_{x \in X} L(x, u), \]
with equality (strong duality) if $X$ is convex and classical regularity conditions are satisfied. It is easily recognized that the left-hand side is the optimal value of our original problem, while the right-hand side defines the so-called dual. In some cases that we shall review, the dual problem may be easier to solve. It then produces a lower bound for the optimal value. Moreover, under the appropriate regularity conditions guaranteeing the absence of a duality gap, the primal and dual optimal values coincide.

Let us introduce the function $L(u) = \min_{x \in X} L(x, u)$. It is well known that $L(u)$ is concave and nondifferentiable. Besides, if $\bar x \in X$ is such that $L(\bar u) = L(\bar x, \bar u)$, then $L(u)$ satisfies the subgradient inequality
\[ L(u) \le L(\bar u) + \langle g(\bar x), u - \bar u \rangle, \quad \forall u \ge 0. \]
Lagrangian relaxation is attractive if the problem $\min_{x \in X} L(x, u)$ is easy to solve and $u$ is of moderate size. A typical situation where this property holds is when $X$ is a Cartesian product and $L(x, u)$ is separable on this product space. The case where $X$ is not convex, for instance because it includes integrality constraints, has led to a host of very successful applications.

Benders decomposition deals with the problem
\[ \min \; f(x) + g(y) \quad \text{subject to} \quad h(x) + k(y) \le 0, \ x \in X, \ y \in Y, \]
where $X \subset \mathbb{R}^n$ is an arbitrary set, $Y \subset \mathbb{R}^p$ is convex, $f : X \to \mathbb{R}$, $g : Y \to \mathbb{R}$ is convex, $h : \mathbb{R}^n \to \mathbb{R}^m$ is convex and $k : Y \to \mathbb{R}^m$ is convex. (Convexity of $k$ and $h$ is to be understood as convexity of each component of those vector-valued functions.) Finally, for the sake of simplicity, we assume that $g$, $h$ and $k$ are continuously differentiable. Let us introduce the function
\[ Q(x) = \min \{ g(y) \mid k(y) \le -h(x), \ y \in Y \}. \tag{2.1} \]
$Q$ may take infinite values; it is convex and usually nondifferentiable. If we assume that the problem defining $Q$ has a finite optimum and satisfies the usual regularity conditions, then, if $\bar u$ is an optimal multiplier in the definition of $Q$,
\[ Q(x) \ge Q(\bar x) + \langle Dh(\bar x)^T \bar u, x - \bar x \rangle, \quad \forall x \in X, \]
where $Dh(\bar x)$ is the Jacobian of $h$ at $\bar x$. The equivalent nondifferentiable problem, under the assumption that $Q$ is finite, is $\min_{x \in X} \{ f(x) + Q(x) \}$, where the value of $Q$ and of one subgradient is computed by solving (2.1). Feasibility cuts are introduced to account for the $x$ values for which problem (2.1) is infeasible.
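To make the Lagrangian-relaxation oracle above concrete, here is a small sketch on an invented toy instance (the finite set $X$, the costs $c$, and the coupling data $A$, $b$ are illustrative, not from the paper): it evaluates $L(u)$ by enumeration and returns the supergradient $g(\bar x)$ that supports the concave function $L$ at $u$.

```python
import numpy as np

# Hypothetical toy instance: X a finite set in R^2, f(x) = <c, x>,
# and two relaxed constraints g(x) = A x - b <= 0.
X = [np.array(p) for p in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]]
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [2.0, 0.5]])
b = np.array([1.5, 1.0])

def dual_oracle(u: np.ndarray):
    """Evaluate L(u) = min_{x in X} f(x) + <u, g(x)> by enumeration;
    return L(u) and g(x_bar), a supergradient of L at u."""
    values = [c @ x + u @ (A @ x - b) for x in X]
    i = int(np.argmin(values))
    return values[i], A @ X[i] - b

L_val, supergradient = dual_oracle(np.array([0.5, 0.0]))
```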
Semi-infinite convex programming pertains to the class of problems
\[ \min \{ f(x) \mid g(x, t) \le 0, \ t \in T \}, \]
where $f$ and $g$ are convex in $x$, and $T$ is a set of infinite or exponential cardinality. The oracle at the point $\bar x$ finds a point $\bar t \in T$ such that $g(\bar x, \bar t) > 0$, and generates the "cut"
\[ g(\bar x, \bar t) + \langle g'_x(\bar x, \bar t), x - \bar x \rangle \le 0. \]
The cut might be computed by solving $\max_{t \in T} g(\bar x, t)$. This is often achieved by dynamic programming (for example in the cutting stock problem), or by global optimization.

Variational inequalities. Let $H(x)$ be a multi-valued monotone operator defined on a compact convex set $Q \subset \mathbb{R}^n$. The variational inequality problem is defined as follows:
\[ \text{find } x^* \in Q : \ \langle h_x, x - x^* \rangle \ge 0, \quad \forall x \in Q, \ h_x \in H(x). \]
The so-called dual gap function is
\[ \psi(x) = \max_{u \in Q} \{ \langle h_u, x - u \rangle \mid h_u \in H(u) \}. \]
If we let $X^*$ be the set of solutions of the variational inequality, then $\psi(x) \ge 0$ is a closed convex function, and $\{\psi(x) = 0\} \iff \{x \in X^*\}$. If the variational inequality is maximal monotone [25], then it can be shown that if $\bar x \notin X^*$, an oracle separating $\bar x$ from $X^*$ is given by
\[ \langle h_{\bar x}, \bar x - u \rangle \ge 0, \quad \forall u \in X^*, \]
for any $h_{\bar x} \in H(\bar x)$. This clearly reduces the variational inequality problem to a feasibility problem given by the separation oracle $h_{\bar x}$.

3 Solution methods

Some known methods. The various methods can be classified as:

1. descent-like methods, such as subgradient optimization and the bundle method, with its variant the level set approach [25];
2. the volume reduction approaches, exemplified by the ellipsoid method;
3. the Kelley-Goldstein-Cheney standard cutting plane method, which is similar by duality to Dantzig-Wolfe column generation and closely related to Benders decomposition; and
4. the central cutting plane methods, such as Levin's center of gravity, the maximum inscribed ellipsoid, the volumetric center (these three methods also have the volume reduction property) and, lastly, the analytic center.

A generic cutting plane scheme. For simplicity, assume that the problem is defined as $\min \{ \langle b, x \rangle \mid x \in C \cap X \}$, where $X = \{ x \mid -e \le x \le e \}$ is the unit cube and the set $C$ is defined by a separation oracle: if $\bar x \in C$, introduce the optimality cut $\langle b, x - \bar x \rangle \le 0$; if $\bar x \notin C$, introduce the feasibility cut $\langle a, x \rangle \le c$. The localization set in the space of the $x$ variables is
\[ F_k = X \cap \{ x \mid \langle a^i, x \rangle \le c_i, \ i = 1, \dots, k-1 \}; \]
it is known to contain the optimal solution. The localization set in the space $(x, z)$ of the epigraph is
\[ G_k = \{ (x, z) \mid x \in X, \ \langle b, x \rangle \le z \le \bar z, \ \langle a^i, x \rangle \le c_i, \ i = 1, \dots, k-1 \}, \]
where $\bar z$ is the best recorded objective value (this set is also known to contain the optimal solution). The $k$-th step of the generic cutting plane algorithm is as follows.

1. Pick $x^k \in F_k$.
2. The oracle returns the cut $\langle a^k, x \rangle \le c_k$.
3. Update $F_{k+1} := F_k \cap \{ x \mid \langle a^k, x \rangle \le c_k \}$.

Specific cutting plane algorithms differ in the choice of the query point $x^k$. Let us focus on two strategies.

Kelley's strategy: select $x = \operatorname{argmin} \{ \langle b, x \rangle \mid x \in F_k \}$.

The analytic center strategy: select
\[ x = \operatorname{argmin} \Big\{ F_0(x) - \sum_{i=1}^{k} \log(c_i - \langle a^i, x \rangle) \Big\}, \]
where $F_0(x) = -\sum_{i=1}^{n} \log\big((1 + x_i)(1 - x_i)\big)$ is the barrier function corresponding to the unit cube $X$. The analytic center method could also be defined in the space of the epigraph, using the analytic center of the set $G_k$.
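In code, the generic scheme reads as the short loop below. This is a schematic sketch only: the oracle and the query rule (Kelley's or the analytic center) are passed in as plug-ins, and the cube bounds are assumed to be handled inside `query_point`.

```python
# Schematic sketch of the generic cutting plane loop of Section 3.
# oracle(x) returns a cut (a, c) meaning <a, x> <= c, or None when it
# certifies the query point; query_point picks x^k in the localization
# set F_k defined by the accumulated cuts.
def cutting_plane(oracle, query_point, max_iter=100):
    cuts = []                        # together with the cube X, defines F_k
    x = None
    for _ in range(max_iter):
        x = query_point(cuts)        # step 1: pick x^k in F_k
        answer = oracle(x)           # step 2: query the separation oracle
        if answer is None:           # no cut returned: x is (near-)optimal
            break
        cuts.append(answer)          # step 3: F_{k+1} = F_k with the new cut
    return x, cuts
```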
Let us enumerate a few distinctive features of the two strategies. Kelley's strategy takes the polyhedral approximation $F_k$ as a "reliable" description of the problem and sets the query at the best point of the localization set. It is easily seen that constrained optima will always be approached from the exterior. In such a case the method does not possess a natural stopping criterion.

The method suffers from another flaw, probably a more troublesome one. Recall that the oracle returns a cutting hyperplane that separates the query point from the solution set. In the limit, the cutting plane may pass through the query point, as is the case with variational inequalities. Since the query point is an extreme point of the localization set in Kelley's approach, the next iteration may select the same query point again, and the method stalls. An illuminating example that confirms the possible slow behavior of Kelley's cutting plane method is fully described in [28].

The motivation for choosing a central query point is very much to obviate the above bad feature of Kelley's strategy. Since centers never touch the boundary of the localization set, the query points are never repeated. Besides, the query point will eventually fall within the feasible set, if the latter has a non-empty interior. At such a point the objective may be evaluated, thus allowing a bracketing of the optimal value of the problem. Centers also have the interesting feature of being only moderately affected by the introduction of new cuts. As a result, central cutting plane methods achieve a kind of regularization that stabilizes the algorithm. In the sequel we shall focus on a specific central cutting plane method based on analytic centers. We shall name it accpm, which is the acronym for Analytic Center Cutting Plane Method.

Characterization of analytic centers of a polytope. Let $A$ be an $n \times m$ matrix and $c$ an $m$-vector. For the sake of more compact notation, let us denote the localization set as $F_D = \{ (x, s) : A^T x + s = c, \ s \ge 0 \}$. The analytic center is the solution of the problem
\[ \min \Big\{ \varphi_D(s) = -\sum_{i=1}^{m} \log s_i \ \Big|\ (x, s) \in F_D \Big\}. \]
We associate with it the dual problem
\[ \min \Big\{ \varphi_P(y) = -\sum_{i=1}^{m} \log y_i \ \Big|\ y \in F_P \Big\}, \]
where $F_P = \{ y : Ay = 0, \ c^T y = m, \ y \ge 0 \}$. The two problems have the same first order optimality conditions:
\[ ys = e, \qquad A^T x + s = c, \ s > 0, \qquad Ay = 0, \ y > 0. \]
This KKT system has a unique solution $((x_c, s_c), y_c)$; besides, $\varphi_P(y) + \varphi_D(s) \ge 0$ for all feasible pairs, and
\[ \{ \varphi_P(y) + \varphi_D(s) = 0 \} \iff \{ y = y_c, \ s = s_c \}. \]

Re-entry direction for a central cut. A current (approximate) analytic center is defined by
\[ A^T \bar x + \bar s = c, \ \bar s > 0, \qquad A \bar y = 0, \ \bar y > 0, \qquad \| e - \bar y \bar s \| < 1. \]
The new dual polytope, obtained after adding the answer of the oracle queried at $\bar x$, is
\[ \tilde F_D = \{ x : \tilde A^T x \le \tilde c \}, \quad \text{with } \tilde A = (A, a) \text{ and } \tilde c = (c; \langle a, \bar x \rangle). \]
The question critical to the efficiency of the method of analytic centers is: where should one start the search for the next analytic center? The theory of IPMs provides the necessary tools to answer it. Indeed, the so-called Dikin ellipsoid
\[ \{ x : \| \bar Y A^T (x - \bar x) \| \le 1 \} \]
is included in the localization set and is centered at the approximate center. Other scalings, e.g., $\bar S^{-1}$ or $\bar Y^{1/2} \bar S^{-1/2}$, may be chosen depending on the type of algorithm, primal, dual or primal-dual, that is used to compute the analytic center. In this presentation we focus on the primal algorithm, that is, on iterations in the $y$ variables, i.e., in the primal (Karmarkar's) space.

Dikin's ellipsoid defines a trust region around $\bar x$. This trust region allows one to design an optimal re-entry direction in the space of the dual variables and slacks (the space of the cutting planes):
\[ \Delta x = -\frac{1}{\delta} (A \bar Y^2 A^T)^{-1} a, \qquad \Delta s = -A^T \Delta x, \]
where $\delta$ is the norm of $a$ in the metric of Dikin's ellipsoid, i.e.,
\[ \delta = \sqrt{ a^T (A \bar Y^2 A^T)^{-1} a }. \]
The expression of this direction in the primal (Karmarkar's) space is
\[ \Delta y = -\bar Y^2 \Delta s = -\frac{1}{\delta} \bar Y^2 A^T (A \bar Y^2 A^T)^{-1} a; \]
this was first suggested in [27].
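These formulas translate directly into a few lines of linear algebra. Below is a dense-matrix sketch (illustrative only; the production code performs the same solves with sparse factorizations, as discussed in Section 4):

```python
import numpy as np

def reentry_direction(A: np.ndarray, y_bar: np.ndarray, a: np.ndarray):
    """Given the current approximate center (primal variables y_bar)
    and a new cut vector a, return delta and the directions dx, ds, dy."""
    Y2 = np.diag(y_bar ** 2)          # \bar Y^2
    M = A @ Y2 @ A.T                  # A \bar Y^2 A^T
    Ma = np.linalg.solve(M, a)        # (A \bar Y^2 A^T)^{-1} a
    delta = np.sqrt(a @ Ma)           # norm of a in the Dikin metric
    dx = -Ma / delta
    ds = -A.T @ dx
    dy = -Y2 @ ds                     # image in the primal (Karmarkar) space
    return delta, dx, ds, dy

# Tiny usage example with invented data (n = 2 rows, m = 3 cuts).
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
delta, dx, ds, dy = reentry_direction(A, np.ones(3), np.array([1.0, 2.0]))
```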
Recovering feasibility and centrality. The new primal-dual pair
\[ \tilde s(\alpha) = \begin{pmatrix} \bar s \\ 0 \end{pmatrix} + \alpha \begin{pmatrix} \Delta s \\ (1-\alpha)\delta \end{pmatrix} \quad \text{and} \quad \tilde y(\alpha) = \begin{pmatrix} \bar y \\ 0 \end{pmatrix} + \alpha \begin{pmatrix} \Delta y \\ 1/\delta \end{pmatrix} \]
is feasible for $0 < \alpha < 1$, and nearly centered for some appropriate choice of $\alpha$, i.e., $\| \tilde y(\alpha) \tilde s(\alpha) - e \| < 1$.

Theorem 1. The new analytic center can be recovered in $O(1)$ pure Newton steps (primal algorithm, dual algorithm or primal-dual algorithm).

Convergence result for central cuts. Assume that the solution set $X^*$ contains a ball of radius $\varepsilon > 0$, so that $x \notin X^*$ implies $c_i - \langle a^i, x \rangle > \varepsilon$ for some $i$, where $\| a^i \| = 1$.

Lemma 2. The number $k$ of iterations is bounded through the inequality
\[ \frac{\varepsilon^2}{n} < \frac{\frac{1}{2} + 2n \log\big(1 + \frac{k}{8n^2}\big)}{2n + k} \, \exp\Big( \frac{2k}{2n + k} \Big). \]

Corollary 3. The total number of calls to an oracle answering central cuts is $O^*(n^2/\varepsilon^2)$. The number of iterations with the primal projective algorithm is also $O^*(n^2/\varepsilon^2)$.

This pseudo-polynomial result appeared in this context in [13]. See also [29] and [1] for related results.

Adding an arbitrarily deep cut. The new cut is
\[ \langle a, x \rangle \le \langle a, \bar x \rangle - \gamma, \quad \text{with } \gamma > 0. \]
The primal-dual pair after a move along Dikin's direction is
\[ \tilde s(\alpha) = \begin{pmatrix} \bar s \\ 0 \end{pmatrix} + \alpha \begin{pmatrix} \Delta s \\ (1-\alpha)\delta - \gamma \end{pmatrix} \quad \text{and} \quad \tilde y(\alpha) = \begin{pmatrix} \bar y \\ 0 \end{pmatrix} + \alpha \begin{pmatrix} \Delta y \\ 1/\delta \end{pmatrix}. \]
If the cut intersects Dikin's ellipsoid (i.e., $\gamma < (1-\alpha)\delta$), the previous reasoning goes through. If the cut lies beyond Dikin's ellipsoid, we cannot recover feasibility in the $x$-space of the cutting planes. However, from the expression of $\tilde y(\alpha)$ we can achieve strict feasibility in the primal space of the primal variable $y$ if $\alpha$ is taken small enough. This remark fully justifies the choice of the primal projective algorithm to deal with arbitrarily deep cuts. A similar argument holds when introducing multiple cuts. Feasibility in the primal space is restored in one step.

Applying the primal projective algorithm with damped Newton steps forces convergence towards the center. The primal Newton step can be expressed as $e - \mu(y) y s(y)$, where $\mu(y)$ is a scalar and $s(y) = c - A^T x(y)$, $x(y)$ resulting from the least squares computation underlying all interior point methods. When $\| e - \mu(y) y s(y) \| < 1$, the primal method is in a domain of quadratic convergence and the current $y$ is an approximate center. But we then have $s(y) > 0$, and $s(y)$ is an approximate center of $F_D$. Using duality on potentials and the long step arguments of [34], one gets the following inequality, where $\eta$ and $\eta_1$ are absolute constants and $\lambda_j$ is the number of damped Newton steps required to recenter after the addition of the $j$-th cutting plane:
\[ \frac{\varepsilon^2}{n} < \frac{\frac{1}{2} + 2n \log\big(1 + \frac{k}{8n^2}\big)}{2n + k} \, \exp\Big( \frac{2k\eta}{2n + k} - \frac{2\eta_1}{2n + k} \sum_{j=1}^{k} \lambda_j \Big). \]
This can be summarized by the following result on the convergence of accpm [16]:

Theorem 4. The total number of calls to the oracle is $O^*(n^2/\varepsilon^2)$, and the total number of primal (damped) Newton steps is $O^*\big((n^2/\varepsilon^2) \log \frac{1}{\varepsilon}\big)$.
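The recentering primitive behind Theorem 1 and Theorem 4 is a damped Newton method on the barrier of the localization set. The sketch below shows that primitive in its simplest textbook form, applied to the dual barrier $-\sum_i \log s_i$; it is a generic illustration, not accpm's actual primal projective iteration, which adds weights and sparse factorizations.

```python
import numpy as np

def analytic_center(A, c, x0, tol=1e-10, max_iter=50):
    """Damped Newton method for min -sum_i log(s_i), s = c - A^T x."""
    x = x0.copy()
    for _ in range(max_iter):
        s = c - A.T @ x
        assert np.all(s > 0), "iterate left the interior"
        g = A @ (1.0 / s)                 # gradient of the barrier in x
        H = (A / s**2) @ A.T              # Hessian A S^{-2} A^T
        dx = np.linalg.solve(H, -g)
        lam = np.sqrt(dx @ H @ dx)        # Newton decrement
        x += dx / (1.0 + lam)             # damped step
        if lam < tol:                     # quadratic convergence region
            break
    return x

# Tiny usage example with invented data (n = 1 variable, m = 3 cuts).
x_c = analytic_center(np.array([[1.0, -1.0, 0.5]]), np.ones(3), np.zeros(1))
```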
Mixing first and second order information. A more refined approach to the convex constrained optimization problem $\min \{ f(y) \mid y \in Q \}$, where $Q$ is a closed bounded convex set with nonempty interior, that directly takes into account the nonlinearity of the objective $f$ or of the set $Q$, should lead to significant improvements in the performance of accpm, by accounting fully for some of the nonlinear structure of the problem and using second order information. A nonlinear version of accpm for the case where $f$ is described by a self-concordant barrier, while $Q$ is given by a separation oracle, was studied in [36]. If, on the other hand, $f(y)$ is convex and described by an oracle that returns cutting planes supporting the epigraph, while the set $Q$ admits a known $\vartheta$-self-concordant barrier, then a novel approach to accpm, the homogeneous cutting plane method, has been suggested and studied in detail in [32].

In [32], Nesterov and Vial embed the problem into a projective space as follows: define
\[ X^* = \{ x = (y, t) : y = t y^*, \ y^* \in Y^*, \ t > 0 \}, \]
where $Y^*$ is the set of optimal solutions, and the cone
\[ K = \{ x = (y, t) : y = t \bar y, \ \bar y \in Q, \ t > 0 \}. \]
If $H$ is a $\vartheta$-self-concordant barrier for $Q$, then
\[ F(x) = c_1 H(y/t) - c_2 \ln t \]
is a self-concordant barrier for the cone $K$. If $Q$ is polyhedral or is a set defined by convex quadratic functions, the barrier can be given explicitly. Let $x = (y, t) \in \operatorname{int} K$. Define $\bar y(x) = y/t \in Q$ and
\[ \hat g(x) = \big( \xi_{\bar y(x)}, \ -\langle \xi_{\bar y(x)}, \bar y(x) \rangle \big), \]
where $\xi_u$ is a subgradient of $f$ at $u \in \operatorname{int} Q$. Assume that $\| \xi \| \le L$ for all $\xi \in \partial f(y)$, $y \in Q$, and denote by $R$ any constant such that $\| y \| \le R$ for all $y \in Q$.

Theorem 5. For any $k \ge 1$ we have
\[ \min_{0 \le i \le k-1} f(y^i) - f^* \ \le\ L \sqrt{ \frac{\vartheta + \sqrt{\vartheta} + e^3 (1 + \vartheta)}{k} } \ \big[ 1 + R^2 \big]^{\frac{1+\vartheta}{2k}}, \]
where $f^*$ denotes the optimal value.

4 Implementation and applications

4.1 Some implementation issues

The analytic center cutting plane method has been coded and is available from Logilab for academic research (Web site: http://ecolu-info.unige.ch/logilab) under the acronym accpm [17]. Some of the implementation issues are the following.

Which algorithm to use to compute analytic centers: primal, dual, or primal-dual? accpm uses the primal projective algorithm, which is ideally suited for column generation since one can compute a complexity estimate even when cuts are arbitrarily deep. The primal-dual method can be used with moderately deep cuts [14]. The implementation of [33], in a framework similar to the logarithmic barrier cutting plane method of [20, 21], shows promising performance on nonlinear multicommodity network flow problems.

How to choose the initial box? The method assumes that the query point lies within a box. If the user's formulation does not include box constraints, accpm proposes default values. The user may also enter guess values; good guesses may significantly enhance performance.

Adaptive box. accpm possesses a built-in mechanism to enlarge box sides (see, e.g., [19]). This mechanism compensates for wrong guesses on the initial boxes; it turns out to be highly efficient in practice.

Column deletion. Iterations become more and more costly as the number of generated columns increases. Though highly desirable, column deletion is perilous, as it is not well supported by the theory of analytic center cutting planes (with the exception of [1]; however, that method has not been implemented, and it relies on shallow cuts, which would slow down the method). Different strategies have been tested [6]. Following the results on a related method [33], the best strategy in problems with a large number of multiple cuts is to perform a drastic deletion only once, say after one third of the estimated total number of iterations. This strategy is implemented in [17].

Weights on some constraints? A weight $p$ on a constraint is equivalent to a $p$-fold repetition of the constraint. Instances in which using this feature on the objective is desirable are discussed in [10].

Quality of the linear algebra. The direction finding computation requires as much accuracy as possible. To compromise between efficiency and accuracy, accpm uses a sparsity exploiting Cholesky factorization, with iterative refinements [6, 18, 35].

Constructing oracles. The oracles are typically problem dependent and are under the user's responsibility. Recent developments tend to automate this process in the case of large scale structured problems, e.g., block angular structures. set is the acronym of Structure Exploiting Tool [9], a device that is hooked to standard algebraic modeling languages and that allows the user to pass the information relative to the structure.
The use of a decomposition scheme can then be fully automated, leaving the user free to use either the standard Kelley-Goldstein-Cheney-Dantzig-Wolfe-Benders scheme or accpm.

Additive objective generating multiple cuts. As remarked in [23, 10, 11], a proper formulation in a decomposition approach may have a tremendous impact on the overall computational effort. In particular, when the objective is additive and subgradients are computed for each term of the sum, it is far better to keep the disaggregated information generated by each individual term than to aggregate it into a single subgradient (or cut). The former approach is usually known as a multiple cuts (or multiple columns) scheme, and is often a must in practical implementations.

The general behavior of accpm can be paraphrased in a few sentences. The strong points of this algorithm are: robustness and reliability (accpm works with similar speed and predictability on very different problems), simplicity (no tuning), and stability (insensitivity to degeneracy). On the negative side, two factors may severely reduce performance: first, individual iterations are costly, especially if the cuts lie in a large dimensional space; second, the algorithm may be slow to identify the optimum when the problem is piecewise linear and all pieces have been generated. This last point puts accpm in a weak position with respect to Kelley's approach on problems such as the linear multicommodity flow problem, where the cuts necessary to describe the optimum are few and quickly generated. Whereas Kelley's strategy stops at once when all necessary cuts are present (Kelley's algorithm, just like the simplex method, has a finite termination criterion; but, also like the simplex method, and contrary to accpm and IPMs, it may converge poorly on some problems), accpm takes cautious steps to guard against a possible bad behavior of the oracle.

4.2 Some applications

To conclude, we briefly review a few interesting applications. Nonlinear multicommodity flow problems lend themselves very naturally to a highly disaggregated formulation. Since the cuts (columns) are made up of indicator vectors of paths on a network, the problem is sparse. By exploiting sparsity, accpm could solve [11, 18] extremely large instances, with up to 5000 arcs and/or 10000 commodities. The capacity planning problem for survivable telecommunications networks is formulated in [26, 35] as a very large structured LP that is solved by a two-level decomposition scheme. Those papers provide instances of problems on which the optimal (Kelley's) strategy fails. Stochastic programming and multi-sectorial planning are traditional applications of a decomposition approach; accpm is used in [2] and [3] to solve these problems. The computation of economic equilibria is a very promising area of application for accpm. A recent thesis [4] and the paper [5] give ample evidence of the solving power of the method on these reputedly difficult problems.

Finally, we would like to mention applications to integer programming. In the first application [7], accpm is used to solve a Lagrangian relaxation of the capacitated multi-item lot sizing problem with set-up times. A full integration of accpm in a column generation, or Lagrangian relaxation, framework for structured integer programming problems (clustering, Weber problem with multiple sites [8]) shows that the reliability and robustness of accpm in applications where a nondifferentiable problem must be solved repeatedly (i.e., at every node of the branch and bound tree) make it a very powerful alternative to both Kelley's cutting plane method and subgradient optimization.

References
[1] D.S. Atkinson and P.M. Vaidya (1995), "A cutting plane algorithm that uses analytic centers", Mathematical Programming, Series B, 69, 1-43.

[2] O. Bahn, O. du Merle, J.-L. Goffin and J.-P. Vial (1995), "A Cutting Plane Method from Analytic Centers for Stochastic Programming", Mathematical Programming, Series B, 69, 45-73.

[3] O. Bahn, A. Haurie, S. Kypreos and J.-P. Vial (1997), "Advanced mathematical programming modeling to assess the benefits from international CO2 abatement cooperation", Technical Report 97.10, HEC/Management Studies, University of Geneva, Switzerland. To appear in EMA.

[4] B. Bueler (1997), Computing Economic Equilibria and its Application to International Trade of Permits: an Agent-Based Approach, PhD thesis, Swiss Federal Institute of Technology, Zurich, Switzerland.

[5] M. Denault and J.-L. Goffin (1997), "On a Primal-Dual Analytic Center Cutting Plane Method for Variational Inequalities", GERAD Technical Report G-97-56, 26 pages.

[6] O. du Merle (1995), Interior Points and Cutting Planes: Development and Implementation of Methods for Convex Optimization and Large Scale Structured Linear Programming, PhD thesis, Department of Management Studies, University of Geneva, Geneva, Switzerland (in French).

[7] O. du Merle, J.-L. Goffin, C. Trouiller and J.-P. Vial (1997), "A Lagrangian Relaxation of the Capacitated Multi-Item Lot Sizing Problem Solved with an Interior Point Cutting Plane Algorithm", Logilab Technical Report 97.5, Department of Management Studies, University of Geneva, Switzerland.

[8] O. du Merle, P. Hansen, B. Jaumard and N. Mladenovic (1997), "An Interior Point Algorithm for Minimum Sum of Squares Clustering", GERAD Technical Report G-97-53, 28 pages.

[9] E. Fragniere, J. Gondzio, R. Sarkissian and J.-Ph. Vial (1997), "Structure exploiting tool in algebraic modeling languages", Logilab Technical Report 97.2, Department of Management Studies, University of Geneva, Switzerland.

[10] J.-L. Goffin, O. du Merle and J.-P. Vial (1996), "On the Comparative Behavior of Kelley's Cutting Plane Method and the Analytic Center Cutting Plane Method", Technical Report 1996.4, Department of Management Studies, University of Geneva, Switzerland, March 1996. To appear in Computational Optimization and Applications.

[11] J.-L. Goffin, J. Gondzio, R. Sarkissian and J.-P. Vial (1997), "Solving Nonlinear Multicommodity Flow Problems by the Analytic Center Cutting Plane Method", Mathematical Programming, Series B, 76(1), 131-154.

[12] J.-L. Goffin, A. Haurie and J.-P. Vial (1992), "Decomposition and nondifferentiable optimization with the projective algorithm", Management Science, 38, 284-302.

[13] J.-L. Goffin, Z.-Q. Luo and Y. Ye (1996), "Complexity analysis of an interior cutting plane for convex feasibility problems", SIAM Journal on Optimization, 6, 638-652.

[14] J.-L. Goffin and F. Sharifi-Mokhtarian (1994), "Using the primal-dual infeasible Newton method in the analytic center cutting plane method with deep cuts", Cahiers du GERAD G-94-41, GERAD, Montreal. Revised version 1997.

[15] J.-L. Goffin and J.-P. Vial (1993), "On the Computation of Weighted Analytic Centers and Dual Ellipsoids with the Projective Algorithm", Mathematical Programming, 60, 81-92.
[16] J.-L. Goffin and J.-P. Vial (1996), "Shallow, deep and very deep cuts in the analytic center cutting plane method", Logilab Technical Report 96.3, Department of Management Studies, University of Geneva, Switzerland. Revised June 1997; to appear in Mathematical Programming.

[17] J. Gondzio, O. du Merle, R. Sarkissian and J.-P. Vial (1996), "ACCPM: A Library for Convex Optimization Based on an Analytic Center Cutting Plane Method", European Journal of Operational Research, 94, 206-211.

[18] J. Gondzio, R. Sarkissian and J.-P. Vial (1997), "Using an Interior Point Method for the Master Problem in a Decomposition Approach", European Journal of Operational Research, 101, 577-587.

[19] J. Gondzio and J.-P. Vial (1997), "Warm Start and epsilon-Subgradients in the Cutting Plane Scheme for Block-Angular Linear Programs", Logilab Technical Report 97.1, Department of Management Studies, University of Geneva, Switzerland. To appear in Computational Optimization and Applications.

[20] D. den Hertog, J. Kaliski, C. Roos and T. Terlaky (1995), "A logarithmic barrier cutting plane method for convex programming", Annals of Operations Research, 58, 69-98.

[21] D. den Hertog, C. Roos and T. Terlaky (1992), "A build-up variant of the logarithmic barrier method for LP", Operations Research Letters, 12, 181-186.

[22] P. Huard (1967), "Resolution of Mathematical Programming with Nonlinear Constraints by the Method of Centers", in J. Abadie, ed., Nonlinear Programming, North Holland, Amsterdam, The Netherlands, 207-219.

[23] K.L. Jones, I.J. Lustig, J.M. Farvolden and W.B. Powell (1993), "Multicommodity network flows: the impact of formulation on decomposition", Mathematical Programming, 62, 95-117.

[24] N.K. Karmarkar (1984), "A New Polynomial-Time Algorithm for Linear Programming", Combinatorica, 4, 373-395.

[25] C. Lemarechal, A. Nemirovskii and Y. Nesterov (1991), "New Variants of Bundle Methods", Rapport de Recherche INRIA No 1508, INRIA-Rocquencourt; also Mathematical Programming, 69 (1995), 111-147.

[26] A. Lisser, R. Sarkissian and J.-P. Vial (1995), "Optimal joint synthesis of base and reserve telecommunication networks", CNET Technical Note NT/PAA/ATR/ORI/4491, France Telecom, Issy-les-Moulineaux, France.

[27] J.E. Mitchell and M.J. Todd (1992), "Solving combinatorial optimization problems using Karmarkar's algorithm", Mathematical Programming, 56, 245-284.

[28] A.S. Nemirovskii and D.B. Yudin (1983), Problem Complexity and Method Efficiency in Optimization, John Wiley, Chichester.

[29] Y. Nesterov (1995), "Cutting plane algorithms from analytic centers: efficiency estimates", Mathematical Programming, Series B, 69, 149-176.

[30] Y. Nesterov (1996), Introductory Lectures on Convex Programming, Lecture Notes, CORE, Belgium.

[31] Y. Nesterov and A.S. Nemirovskii (1994), Interior-Point Polynomial Algorithms in Convex Programming, SIAM, Philadelphia.

[32] Y. Nesterov and J.-Ph. Vial (1997), "Homogeneous analytic center cutting plane methods for convex problems and variational inequalities", Logilab Technical Report 1997.4.

[33] J. Gondzio and R. Sarkissian (1996), "Column Generation with a Primal-Dual Method", Logilab Technical Report 96.6, Department of Management Studies, University of Geneva, Switzerland.

[34] C. Roos and J.-P. Vial (1990), "Long steps with the logarithmic barrier function in linear programming", in: Economic Decision Making: Games, Econometrics and Optimization, J. Gabszewicz, J.-F. Richard and L. Wolsey, eds., Elsevier Science Publishers B.V., 433-441.
[35] R. Sarkissian (1997), Telecommunications Networks: Routing and Survivability Optimization Using a Central Cutting Plane Method, PhD thesis, Ecole Polytechnique Federale de Lausanne, Switzerland.

[36] F. Sharifi-Mokhtarian and J.-L. Goffin (1996), "A Nonlinear Analytic Center Cutting Plane Method for a Class of Convex Programming Problems", Technical Report G-96-53, Groupe d'Etudes et de Recherche en Analyse des Decisions, Universite de Montreal, Canada; also: SIAM Journal on Optimization, to appear in revised version.

[37] G. Sonnevend (1986), "New algorithms in convex programming based on a notion of 'center' (for systems of analytic inequalities) and on rational extrapolation", in: K.H. Hoffmann, J.-B. Hiriart-Urruty, C. Lemarechal and J. Zowe, eds., Trends in Mathematical Optimization: Proceedings of the 4th French-German Conference on Optimization (Irsee, West Germany); also: International Series of Numerical Mathematics 84, Birkhauser Verlag, Basel, Switzerland (1988), 311-327.

[38] J.-P. Vial (1996), "A generic path-following algorithm with a sliding constraint and its application to linear programming and the computation of analytic centers", Technical Report 1996.8, LOGILAB/Management Studies, University of Geneva, Switzerland.

[39] P. Wolfe (1975), "A Method of Conjugate Subgradients for Minimizing Nondifferentiable Functions", Mathematical Programming Study, 3, 145-173.

[40] Y. Ye (1992), "A potential reduction algorithm allowing column generation", SIAM Journal on Optimization, 2, 7-20.
